
    The mei-friend Web Application: Editing MEI in the Browser

    mei-friend is a ‘last mile’ editor for MEI encodings, intended to ease the common task of cleaning up encodings generated via optical music recognition or via conversion from other formats. The open-source tool, building on the earlier mei-tools-atom codebase, was first presented to the MEI community at MEC ’21 and has received more than 500 downloads, demonstrating the demand for interactive MEI editing. Among the feedback we received at this presentation were requests to port the tool, which was implemented as a plugin package for the Atom text editor, to work in a generic Web browser environment; while Atom’s architecture is already built on a Chromium (browser) back-end, it is somewhat slow to use, and the installation process and requirement for a separate application may be off-putting to less technically-minded users. Here, we are pleased to present mei-friend in its new guise as a full-featured, cross-browser compatible Web application, with optimized performance and an extended set of features.

    Alleviating the Last Mile of Encoding: The mei-friend Package for the Atom Text Editor

    MEC 2021 BEST PAPER AWARD. Though MEI is widely used in music informatics and digital musicology research, the relative lack of authoring software and the specialised nature of its community have limited the availability of high-quality MEI encodings. Translating to MEI from other encoding formats, or generating MEI via optical music recognition processes, is thus a typical component of many MEI-project workflows. However, automated translations rarely achieve results of sufficient quality, a problem well known in the community and documented in the literature. Final correction and validation by hand is therefore a common requirement. In this paper, we present mei-friend, an extension to the Atom text editor, which aims to reduce the manual labour required in this process. The tool facilitates the most common MEI editing tasks, including the insertion and manipulation of MEI elements; makes the encoded score visible and interactively accessible to the user; and provides quality-of-life conveniences, including keyboard shortcuts for editing functions and intelligent navigation of the MEI hierarchy. We detail the tool’s implementation, describe its functionalities, and evaluate its responsiveness during the editing process, even when editing very large MEI files.
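    To illustrate the kind of hierarchy navigation described above, here is a minimal TypeScript sketch (ours, not code from the paper): it parses a small MEI fragment with the browser’s DOMParser, looks up an element by xml:id, and walks upward to the enclosing measure. The MEI fragment and its ids are invented for illustration, and the namespace-wildcard attribute selector is assumed to be supported (it is in current browsers).

        // Minimal sketch: hierarchy-aware navigation in an MEI document.
        // The fragment and its xml:ids are invented for illustration.
        const meiSource = `
        <mei xmlns="http://www.music-encoding.org/ns/mei">
          <measure n="1" xml:id="m1">
            <staff n="1">
              <layer n="1">
                <note xml:id="n1" pname="c" oct="4" dur="4"/>
              </layer>
            </staff>
          </measure>
        </mei>`;

        const doc = new DOMParser().parseFromString(meiSource, "application/xml");

        // Look up an element by its xml:id (namespace-wildcard selector).
        function byId(id: string): Element | null {
          return doc.querySelector(`[*|id="${id}"]`);
        }

        // Walk up the MEI hierarchy from any element to its enclosing measure.
        function enclosingMeasure(el: Element): Element | null {
          let cur: Element | null = el;
          while (cur && cur.localName !== "measure") cur = cur.parentElement;
          return cur;
        }

        const note = byId("n1");
        console.log(note && enclosingMeasure(note)?.getAttribute("n")); // "1"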

    Read/Write Digital Libraries for Musicology

    The Web and other digital technologies have democratised music creation, reception, and analysis, putting music in the hands, ears, and minds of billions of users. Music digital libraries typically focus on an essential subset of this deluge—commercial and academic publications, and historical materials—but neglect to incorporate contributions by scholars, performers, and enthusiasts, such as annotations or performed interpretations of these artifacts, despite their potential utility for many types of users. In this paper we consider means by which digital libraries for musicology may incorporate such contributions into their collections, adhering to principles of FAIR data management and respecting contributor rights as outlined in the EU’s General Data Protection Regulation. We present an overview of centralised and decentralised approaches to this problem, and propose hybrid solutions in which contributions reside in a) user-controlled personal online datastores or b) decentralised file storage, and c) are published and aggregated into digital library collections. We outline the implementation of these ideas using Solid, a Web decentralisation project building on W3C standard technologies to facilitate publication of, and control over, Linked Data. We demonstrate the feasibility of this approach by implementing prototypes supporting two types of contribution: Web Annotations describing or analysing musical elements in score encodings and music recordings; and music performances and associated metadata supporting performance analyses across many renditions of a given piece. Finally, we situate these ideas within a wider conception of enriched, decentralised, and interconnected online music repositories.
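    As a concrete illustration of such a contribution, the following TypeScript sketch (ours, not from the paper) builds a W3C Web Annotation targeting one element of a score encoding and posts it to a Solid Pod container over HTTP; the Pod and score URLs are hypothetical, and a real client would also attach Solid authentication.

        // Sketch: a Web Annotation on a score element, stored in a
        // user-controlled Personal Online Datastore. URLs are hypothetical.
        const annotation = {
          "@context": "http://www.w3.org/ns/anno.jsonld",
          type: "Annotation",
          motivation: "describing",
          body: {
            type: "TextualBody",
            value: "Recapitulation begins here.",
            format: "text/plain",
          },
          // Target a specific MEI element by its fragment identifier.
          target: "https://example.org/scores/sonata.mei#note-123",
          creator: "https://alice.example/profile/card#me",
        };

        // POST into an LDP container on the contributor's Pod; the server
        // responds with the URI of the newly created annotation resource.
        async function saveToPod(container: string): Promise<string | null> {
          const res = await fetch(container, {
            method: "POST",
            headers: { "Content-Type": "application/ld+json" },
            body: JSON.stringify(annotation),
          });
          return res.headers.get("Location");
        }

        saveToPod("https://alice.example/public/annotations/").then(console.log);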

    Rehearsal encodings with a social life

    MEI-encoded scores are versatile music information resources representing musical meaning within a finely addressable XML structure. The Verovio MEI engraver reflects the hierarchy and identifiers of these encodings into its generated SVG output, supporting presentation of digital scores as richly interactive Web applications. Typical MEI workflows initially involve scholarly or editorial activities to generate an encoding, followed by its subsequent publication and use. Further iterations may derive new encodings from precedents; but the suitability of MEI to interactive applications also offers more dynamic alternatives, in which the encoding provides a framework connecting data that is generated and consumed simultaneously in real-time. Exemplars include compositions which self-modify according to external contextual parameters, such as the current weather at time of performance, or which are assembled by user-imposed external semantics, such as a performer’s explicit choices and implicit performative success at playing musical triggers within a composition. When captured, these external semantic signals (interlinked with the MEI structure) themselves encode the evolution of a dynamic score during a particular performance. They have value beyond the immediate performance context; when archived, they allow audiences to revisit and compare different performances.
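    To make the id-reflection concrete, here is a short TypeScript sketch (ours): it renders an MEI string to SVG with the Verovio JavaScript toolkit and attaches click handlers that recover the MEI identifier of a selected note. The toolkit’s renderData call is part of Verovio’s documented API; the surrounding page wiring (a div with id "score", a global mei string) is assumed.

        // Sketch: Verovio reflects MEI xml:ids into its SVG output, so a
        // click on a rendered note maps straight back to the encoding.
        declare const verovio: any;   // Verovio toolkit, loaded globally
        declare const mei: string;    // an MEI encoding, assumed present

        const tk = new verovio.toolkit();
        const svg: string = tk.renderData(mei, {}); // first page as SVG
        document.getElementById("score")!.innerHTML = svg;

        document.querySelectorAll("#score g.note").forEach((g) => {
          g.addEventListener("click", () => {
            // g.id equals the xml:id of the corresponding MEI <note>.
            console.log("selected MEI element:", g.id);
          });
        });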

    Notes on the Music: A social data infrastructure for music annotation

    Beside transmitting musical meaning from composer to reader, symbolic music notation affords the dynamic addition of layers of information by annotation. This allows music scores to serve as rudimentary communication frameworks. Music encodings bring these affordances into the digital realm; though annotations may be represented as digital pen-strokes upon a score image, they must be captured using machine-interpretable semantics to fully benefit from this transformation. This is challenging, as annotators’ requirements are heterogeneous, varying both across different types of user (e.g., musician, scholar) and within these groups, depending on the specific use-case. A hypothetical all-encompassing tool catering to every conceivable annotation type, even if it were possible to build, would vastly complicate user interaction. This additional complexity would significantly increase cognitive load and impair usability, particularly in dynamic real-time usage contexts, e.g., live annotation during music rehearsal or performance. To address this challenge, we present a social data infrastructure that facilitates the creation of use-case specific annotation toolkits. Its components include a selectable-score module that supports customisable click-and-drag selection of score elements (e.g., notes, measures, directives); the Web Annotations data model, extended to support the creation of custom, Web-addressable annotation types supporting the specification and (re-)use of annotation palettes; and the Music Encoding and Linked Data (MELD) Javascript client library, used to build interfaces that map annotation types to rendering and interaction handlers. We have extended MELD to support the Solid platform for social Linked Data, allowing annotations to be privately stored in user-controlled Personal Online Datastores (Pods), or selectively shared or published. To demonstrate the feasibility of our proposed approach, we present annotation interfaces employing the outlined infrastructure in three distinct use-cases: scholarly communication; music rehearsal; and rating during music listening.
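    The mapping of annotation types to rendering and interaction handlers can be pictured with the following TypeScript sketch (our illustration; MELD’s actual API differs): a registry keys each Web-addressable annotation type to its handlers, so a use-case specific toolkit is assembled by composing registry entries rather than by extending one monolithic tool. The type URIs and handler signatures are invented.

        // Sketch: type-to-handler registry for custom annotation types.
        type Annotation = { type: string; target: string; body?: unknown };

        interface Handlers {
          render: (a: Annotation) => void;     // how the annotation is drawn
          onSelect?: (a: Annotation) => void;  // optional interaction hook
        }

        const registry: Record<string, Handlers> = {
          "https://example.org/annotation-types#Highlight": {
            render: (a) => {
              const id = a.target.split("#")[1];
              if (id) document.getElementById(id)?.classList.add("highlighted");
            },
          },
          "https://example.org/annotation-types#Rating": {
            render: (a) => console.log("rating", a.body, "on", a.target),
          },
        };

        // Dispatch each incoming annotation; unknown types are ignored.
        function dispatch(a: Annotation): void {
          registry[a.type]?.render(a);
        }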

    Computational Models of Expressive Music Performance: A Comprehensive and Critical Review

    Expressive performance is an indispensable part of music making. When playing a piece, expert performers shape various parameters (tempo, timing, dynamics, intonation, articulation, etc.) in ways that are not prescribed by the notated score, thereby producing an expressive rendition that brings out dramatic, affective, and emotional qualities that may engage and affect the listeners. Given the central importance of this skill for many kinds of music, expressive performance has become an important research topic for disciplines such as musicology and music psychology. This paper focuses on a specific thread of research: work on computational music performance models. Computational models are attempts at codifying hypotheses about expressive performance in terms of mathematical formulas or computer programs, so that they can be evaluated in systematic and quantitative ways. Such models can serve at least two purposes: they permit us to systematically study certain hypotheses regarding performance; and they can be used as tools to generate automated or semi-automated performances, in artistic or educational contexts. The present article presents an up-to-date overview of the state of the art in this domain. We explore recent trends in the field, such as a strong focus on data-driven (machine learning) approaches; a growing interest in interactive expressive systems, such as conductor simulators and automatic accompaniment systems; and an increased interest in exploring cognitively plausible features and models. We provide an in-depth discussion of several important design choices in such computer models, and discuss a crucial (and still largely unsolved) problem that is hindering systematic progress: the question of how to evaluate such models in scientifically and musically meaningful ways. From all this, we finally derive some research directions that should be pursued with priority, in order to advance the field and our understanding of expressive music performance.
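    As a toy instance of “codifying hypotheses as formulas”, the TypeScript sketch below (ours, not a model from the article) implements a linear basis-function model: each score note is described by feature values, and a local tempo factor is predicted as a weighted sum of those features. The features and weights are invented, not fitted to data.

        // Toy linear basis-function model of expressive tempo: the predicted
        // tempo factor (1.0 = nominal tempo) is a weighted sum of score cues.
        type ScoreNote = { pitch: number; metricStrength: number; phrasePos: number };

        // One basis function per hypothesised score cue.
        const basis: Array<(n: ScoreNote) => number> = [
          (n) => n.metricStrength,                  // emphasis on strong beats
          (n) => 1 - Math.abs(2 * n.phrasePos - 1), // arch over the phrase
          (n) => n.pitch / 127,                     // register as a weak cue
        ];
        // Invented weights: slow on strong beats, speed up mid-phrase.
        const weights = [-0.04, 0.08, 0.02];

        function tempoFactor(n: ScoreNote): number {
          return 1 + basis.reduce((sum, f, i) => sum + weights[i] * f(n), 0);
        }

        // End of a phrase, strong beat: the model predicts a slight slowing.
        console.log(tempoFactor({ pitch: 64, metricStrength: 1, phrasePos: 0.95 }));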

    Was kennzeichnet die Interpretation eines guten Musikers? Die integrierte Analyse von Tempo- und Lautstärkegestaltung und ihre musikpädagogischen Anwendungsperspektiven [What characterises a good musician’s interpretation? The integrated analysis of tempo and dynamics shaping and its potential applications in music education]

    The importance of interpretation for the effect of a piece of music is undisputed. Every music lover has had striking experiences in this regard, witnessing how one and the same composition can, depending on the interpretation, become gripping music or degenerate into a boring, wooden reeling-off of notes. Consequently, questions of convincing musical interpretation are an integral part of instrumental teaching, and intensive work in this area is among a teacher’s main tasks. From a scientific point of view, an unsolved puzzle remains here: all that is clear is that this fundamental effect arises from an interplay of subtleties, particularly in the shaping of dynamics, tempo, and articulation. But which constellations of these subtleties lead to a convincing interpretation? What regularities exist here? [...] [A] new method of performance research [is to be developed] in order to test which approach achieves a musically meaningful reduction of the ever-accruing wealth of data, enables an integrated view of dynamics and tempo shaping, and is suitable for application in musical and music-pedagogical practice. From this follows, in particular, the requirement for a readily intelligible graphical presentation of the results. (DIPF/Orig.)
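    A minimal TypeScript sketch of what an integrated tempo-dynamics view could look like computationally (our illustration, not the method developed in the study): beat-level tempo is derived from inter-beat intervals, both series are smoothed to tame the wealth of data, and the paired values form a single 2-D trajectory suitable for graphical presentation. The beat times and loudness values are invented example data.

        // Sketch: an integrated tempo-loudness trajectory per performance.
        const beatTimes = [0.0, 0.52, 1.01, 1.55, 2.12, 2.66]; // seconds
        const loudness  = [62, 64, 67, 66, 61, 58];            // dB, per beat

        // Local tempo (BPM) from consecutive inter-beat intervals.
        const tempo = beatTimes.slice(1).map((t, i) => 60 / (t - beatTimes[i]));

        // Simple moving average: a coarse but effective data reduction.
        function smooth(xs: number[], w = 1): number[] {
          return xs.map((_, i) => {
            const win = xs.slice(Math.max(0, i - w), i + w + 1);
            return win.reduce((a, b) => a + b, 0) / win.length;
          });
        }

        // Pair the tempo between beats i and i+1 with loudness at beat i+1:
        // one smoothed (tempo, loudness) point per beat, one curve per rendition.
        const smTempo = smooth(tempo);
        const smLoud = smooth(loudness);
        const trajectory = smTempo.map((t, i) => [t, smLoud[i + 1]]);
        console.log(trajectory);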